AI tools aren’t helping developers save time on software testing – but small teams are reaping the rewards
AI is doing little to help developers with the most time-consuming and painful parts of their daily workflows, according to new research.
In a survey commissioned by Rainforest, three-quarters of developers using open source frameworks for test automation said they were using AI to assist with test writing and/or maintenance, both of which are notoriously time-consuming tasks.
However, the promised productivity benefits of AI tools haven’t materialized: teams using AI report spending more time on these tasks than teams that don’t.
Mike Sonders, head of marketing at Rainforest, said a leading factor could be the inherent complexity of products at organizations that have chosen to adopt AI for development purposes.
Sonders noted that, in these circumstances, teams have “more work than average to keep automated test suites updated” and end up spending more time on general administrative upkeep.
“Maybe AI is saving those teams from spending even more time on test suite upkeep, and that effect is hiding in the results,” he said.
“But given the large adoption rate of AI for test creation and maintenance among teams who use open source automation frameworks, this scenario seems unlikely. It’s more likely that AI just isn’t currently delivering velocity benefits for these recurring test automation tasks.”
The good news is that the use of AI is benefiting one group of developers – those in smaller, agile teams working with open source.
It’s the smallest teams that are least likely to have formal policies and procedures in place, and therefore the least likely to keep their automated test suites updated, the study noted.
However, small teams using open source testing frameworks that adopt AI are far more likely to keep their test suites updated and reliable, a distinct contrast with their peers who don’t.
Developers are growing in confidence
Developers are starting to feel more confident about using AI. More than half said their trust in the accuracy of generative AI output and overall security has gone up over the past year.
More than nine in ten respondents said they were using open source frameworks, and those teams spend more time on test creation and maintenance than teams using no-code tools.
This was particularly striking for mid-sized teams of between 11 and 30 developers, the study found: those using open source spent more than 20 hours on these tasks, compared with only one in ten teams using no-code tools.
The results, Sonders said, don’t necessarily mean that AI is a bad idea for teams using open source frameworks.
“Some solutions are certainly more effective than others, but the results suggest that teams are still trying to find the ones that work. And there are probably areas in which the technology still needs to improve,” he said.
“Our data show that teams using no-code to automate their E2E tests are spending a lot less time on test maintenance tasks. That gives them more time and resources to dedicate to shipping code.”